cfee398643cbc3dc5eefc89334cacdc1-AuthorFeedback.pdf

Neural Information Processing Systems

We thank the reviewers for their insightful comments. A number of additional experiments were suggested. AQ produces models specifically optimized for robust few-shot adaptation. In Section 4.2, we note that perturbing support data optimizes the network for adversarial fine-tuning.


Alleviating the Sample Selection Bias in Few-shot Learning by Removing Projection to the Centroid

Neural Information Processing Systems

While a good feature extractor may help cluster unseen data, the task distribution shift between training and testing [25] still makes it hard to estimate the novel class distribution from the small number of samples in the support set. Thus, performance is strongly correlated with the sample quality of the support data.


On sensitivity of meta-learning to support data

Neural Information Processing Systems

Meta-learning algorithms are widely used for few-shot learning, for example in image recognition systems that readily adapt to unseen classes after seeing only a few labeled examples. Despite their success, we show that modern meta-learning algorithms are extremely sensitive to the data used for adaptation, i.e., support data. In particular, we demonstrate the existence of (unaltered, in-distribution, natural) images that, when used for adaptation, yield accuracy as low as 4% or as high as 95% on standard few-shot image classification benchmarks. We explain our empirical findings in terms of class margins, which in turn suggests that robust and safe meta-learning requires larger margins than supervised learning.



Mind the Gap Between Prototypes and Images in Cross-domain Finetuning

Neural Information Processing Systems

In cross-domain few-shot classification (CFC), recent works mainly focus on adapting a simple transformation head, on top of a frozen pre-trained backbone, with a few labeled examples. The head projects embeddings into a task-specific metric space, where classification is performed by measuring similarities between image instance and prototype representations.


Dynamic Lagging for Time-Series Forecasting in E-Commerce Finance: Mitigating Information Loss with A Hybrid ML Architecture

Sharma, Abhishek, Parush, Anat, Wadhwa, Sumit, Savir, Amihai, Guinard, Anne, Srivastava, Prateek

arXiv.org Artificial Intelligence

Accurate forecasting in the e-commerce finance domain is particularly challenging due to irregular invoice schedules, payment deferrals, and user-specific behavioral variability. These factors, combined with sparse datasets and short historical windows, limit the effectiveness of conventional time-series methods. While deep learning and Transformer-based models have shown promise in other domains, their performance deteriorates under partial observability and limited historical data. To address these challenges, we propose a hybrid forecasting framework that integrates dynamic lagged feature engineering and adaptive rolling-window representations with classical statistical models and ensemble learners. Our approach explicitly incorporates invoice-level behavioral modeling, structured lagging of support data, and custom stability-aware loss functions, enabling robust forecasts in sparse and irregular financial settings. Empirical results demonstrate an approximately 5% reduction in MAPE compared to baseline models, translating into substantial financial savings. Furthermore, the framework enhances forecast stability over quarterly horizons and strengthens the feature-target correlation by capturing both short- and long-term patterns, leveraging user profile attributes, and simulating upcoming invoice behaviors. These findings underscore the value of combining structured lagging, invoice-level closure modeling, and behavioral insights to advance predictive accuracy in sparse financial time-series forecasting.
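The lagged-feature and rolling-window construction the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the column names (`user_id`, `date`, `amount`), the specific lag and window choices, and the helper `add_dynamic_lag_features` are all hypothetical, and the synthetic data stands in for real invoice records.

```python
import numpy as np
import pandas as pd

# Hypothetical invoice-level data: one row per (user, date) with a payment amount.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "user_id": np.repeat(["u1", "u2"], 30),
    "date": list(pd.date_range("2024-01-01", periods=30)) * 2,
    "amount": rng.gamma(2.0, 50.0, size=60).round(2),
})

def add_dynamic_lag_features(frame, lags=(1, 7, 14), windows=(7, 14)):
    """Per-user lagged values and rolling-window means for a sparse series.

    Illustrative lag/window choices; a real system would tune these per series.
    """
    frame = frame.sort_values(["user_id", "date"]).copy()
    grouped = frame.groupby("user_id")["amount"]
    for k in lags:
        # Value k days back for the same user; NaN where history is too short.
        frame[f"lag_{k}"] = grouped.shift(k)
    for w in windows:
        # shift(1) keeps each window strictly historical, avoiding target leakage.
        frame[f"roll_mean_{w}"] = grouped.transform(
            lambda s: s.shift(1).rolling(w, min_periods=1).mean()
        )
    return frame

features = add_dynamic_lag_features(df)
```

The resulting frame can feed any downstream regressor; rows whose lags are NaN (the start of each user's history) would typically be dropped or imputed before training.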